# Creative writing optimization

**Qwq 32B ArliAI RpR V4 GGUF** · Apache-2.0 · Mungert
A text-generation model based on Qwen/QwQ-32B, specialized for role-playing and creative writing, with support for ultra-low-bit quantization and long-dialogue processing.
Tags: Large Language Model, Transformers, English · Downloads: 523 · Likes: 2
**Qwen3 4B Mishima Imatrix GGUF** · Apache-2.0 · DavidAU
A Mishima Imatrix quantization of Qwen3-4B, enhanced with dedicated datasets for prose-style generation.
Tags: Large Language Model · Downloads: 105 · Likes: 2
**Qwen3 32B GGUF** · Apache-2.0 · lmstudio-community
Qwen3-32B is a large language model developed by the Qwen team, supporting a 131,072-token context with strong capabilities in mathematics, programming, and commonsense reasoning.
Tags: Large Language Model · Downloads: 56.66k · Likes: 7
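Long context windows like the 131,072 tokens cited above are dominated at inference time by the KV cache rather than the weights. As a rough sketch of that cost (the layer, head, and dimension values below are illustrative GQA numbers, not taken from any model card in this list):

```python
# Back-of-the-envelope KV-cache size for long-context inference.
# layers / kv_heads / head_dim below are ILLUSTRATIVE values only,
# not the actual Qwen3-32B configuration.

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                ctx_len: int, bytes_per_elem: int = 2) -> float:
    """Size of keys + values across all layers, in GB (fp16 by default)."""
    # Factor of 2 covers the separate K and V tensors per layer.
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Hypothetical 64-layer model with 8 KV heads of dim 128, at full window:
print(f"{kv_cache_gb(64, 8, 128, 131072):.1f} GB")  # ~34.4 GB at fp16
```

The point of the sketch: filling a 128k window can cost tens of gigabytes on top of the quantized weights, which is why grouped-query attention (fewer KV heads) and KV-cache quantization matter for these models.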
**Qwen3 1.7B GGUF** · lmstudio-community
Qwen3-1.7B is a 1.7B-parameter large language model developed by Qwen, supporting a 32k-token context and excelling at creative writing, role-playing, and multi-turn dialogue.
Tags: Large Language Model · Downloads: 13.32k · Likes: 3
**Qwq 32B ArliAI RpR V3** · Apache-2.0 · ArliAI
QwQ-32B-ArliAI-RpR-v3 is a reasoning model fine-tuned from QwQ-32B, specialized for role-playing and creative writing, with strong creativity and long-dialogue coherence.
Tags: Large Language Model, Transformers, English · Downloads: 497 · Likes: 37
**Synthia S1 27b Bnb 4bit** · GusPuffy
Synthia-S1-27b is an advanced reasoning model developed by Tesslate AI, focused on logical reasoning, coding, and role-playing tasks.
Tags: Text-to-Image, Transformers · Downloads: 858 · Likes: 1
**Gemma 3 Glitter 27B** · allura-org
A creative-writing model based on Gemma 3 27B, built from a 50/50 merge of the 27B IT and 27B PT weights, noted for natural, fluent prose.
Tags: Large Language Model, Transformers · Downloads: 112 · Likes: 4
**Llama 3.x 70b Hexagon Purple V2** · Nexesenex
Hexagon Purple V2 is a three-stage merge built on a Smartracks base, incorporating capabilities from Deepseek Distill R1, Nemotron, and Tulu, with performance tuned through multi-model merging.
Tags: Large Language Model, Transformers · Downloads: 417 · Likes: 2
**Qwen2.5 QwQ 37B Eureka Triple Cubed** · Apache-2.0 · DavidAU
An enhanced version of QwQ-32B that improves reasoning and output quality through "cubed" and "triple-cubed" merge methods, with 128k context support.
Tags: Large Language Model, Transformers, Other · Downloads: 210 · Likes: 5
**EVA Qwen2.5 72B V0.2** · Other · EVA-UNIT-01
A large language model fine-tuned from Qwen2.5-72B, specialized for text generation and instruction following.
Tags: Large Language Model, Transformers · Downloads: 392 · Likes: 19
**Darksapling V2 Ultra Quality 7B GGUF** · Apache-2.0 · DavidAU
A complete re-merge and remaster of the Dark Sapling V2 7B model, carried out at 32-bit precision for higher quality, with a 32k context length.
Tags: Large Language Model, English · Downloads: 385 · Likes: 3
**Bagel 34b V0.2** · Apache-2.0 · jondurbin
An experimental fine-tune of yi-34b-200k suited to creative writing, role-playing, and similar tasks; the DPO stage has not been applied.
Tags: Large Language Model, Transformers · Downloads: 265 · Likes: 41
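Several entries above are GGUF builds advertising ultra-low-bit quantization. A quick way to gauge whether a given quant fits your hardware is the bits-per-weight arithmetic below (an illustrative lower bound: real GGUF quants add per-block scales and keep some tensors at higher precision, so actual files run somewhat larger):

```python
# Rough size estimate for a quantized model file (weights only).
# Treat the result as a lower bound; GGUF metadata and mixed-precision
# tensors add overhead on top of this.

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB for n_params parameters."""
    return n_params * bits_per_weight / 8 / 1e9

# A 32B model (e.g. one of the QwQ-32B variants listed above):
print(f"~4.5 bpw (Q4-class): {quantized_size_gb(32e9, 4.5):.1f} GB")  # 18.0 GB
print(f"~2.6 bpw (Q2-class): {quantized_size_gb(32e9, 2.6):.1f} GB")  # 10.4 GB
print(f"16 bpw (F16):        {quantized_size_gb(32e9, 16):.1f} GB")   # 64.0 GB
```

This is why the low-bit variants matter for the 27B-to-72B models in this list: dropping from F16 to a Q4-class quant cuts the weight footprint roughly 3.5x, at some cost in output quality.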